Do you believe your error bars?
Computing uncertainty estimates from Monte Carlo data usually assumes the data is normal (or can be reblocked enough to become effectively normal, per the central limit theorem). But what if we don't have quite that much data? What effect does the underlying distribution have on the uncertainty estimates (error bars)?
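To make the reblocking idea concrete, here's a minimal sketch of the standard blocking transformation (in the spirit of Flyvbjerg and Petersen): repeatedly average adjacent pairs of samples and watch the estimated standard error plateau as the blocks become independent. The function names and the AR(1) test data are my own illustration, not anything from the paper:

```python
import numpy as np

def blocking_transform(data):
    """One blocking step: average adjacent pairs, halving the series length."""
    n = len(data) // 2
    return 0.5 * (data[:2 * n:2] + data[1:2 * n:2])

def standard_error(data):
    """Naive standard error of the mean, assuming independent samples."""
    return np.std(data, ddof=1) / np.sqrt(len(data))

def reblocked_errors(data, min_blocks=32):
    """Standard error at each blocking level. The estimate plateaus once
    blocks are effectively independent (and, per the CLT, closer to normal)."""
    errors = []
    while len(data) >= min_blocks:
        errors.append(standard_error(data))
        data = blocking_transform(data)
    return errors

# Example: serially correlated AR(1) data, where the naive error is too small
rng = np.random.default_rng(0)
x = np.empty(2**16)
x[0] = rng.normal()
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()

for level, err in enumerate(reblocked_errors(x)):
    print(f"blocking level {level}: stderr = {err:.4f}")
```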
Mervlyn Moodley looks at this question in the paper "The Lognormal Distribution and Quantum Monte Carlo Data".
The interesting figures are Figures 3 and 9. Figure 3 shows how the confidence intervals change as the data is reblocked (and hence brought closer to normal). Figure 9 is generated from a real simulation, and there it seems there is very little difference between the error bars computed assuming normality and those that take the underlying lognormal distribution into account.
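To get a feel for how much the assumed distribution matters, here's a rough comparison of the usual normal-theory interval against one built for lognormal data. I'm using Cox's method for the mean of a lognormal as the illustration; that's my choice for the sketch, not necessarily the construction used in the paper:

```python
import numpy as np
from scipy import stats

def normal_ci(x, conf=0.95):
    """Usual normal-theory confidence interval for the mean."""
    n = len(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.5 * (1 + conf), df=n - 1)
    return np.mean(x) - t * se, np.mean(x) + t * se

def lognormal_ci_cox(x, conf=0.95):
    """Cox's interval for the mean of lognormal data (requires x > 0).
    If ln(x) ~ N(mu, sigma^2), the mean is exp(mu + sigma^2 / 2)."""
    y = np.log(x)
    n = len(y)
    ybar, s2 = np.mean(y), np.var(y, ddof=1)
    center = ybar + 0.5 * s2
    half = stats.norm.ppf(0.5 * (1 + conf)) * np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
    return np.exp(center - half), np.exp(center + half)

# Example: modest sample of lognormal "Monte Carlo" data
rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.5, size=200)
print("normal-theory CI:  ", normal_ci(x))
print("lognormal (Cox) CI:", lognormal_ci_cox(x))
```

With a decent sample size the two intervals come out close, which is consistent with the small differences seen in Figure 9; the distinction should matter more when the data is scarce or strongly skewed.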